# Long Text Generation
## Schreiber Mistral Nemo 12B

**Apache-2.0 · nbeerbower · 107 downloads · 1 like** · *Large Language Model, Transformers*

Schreiber-mistral-nemo-12B is a large language model fine-tuned from mistral-nemo-kartoffel-12B, focused on more capable and accurate language processing.
## Qwen3 0.6B Unsloth Bnb 4bit

**Apache-2.0 · unsloth · 50.36k downloads · 7 likes** · *Large Language Model, Transformers, English*

Qwen3 is the latest generation of the Qwen series of large language models, offering a comprehensive set of dense and mixture-of-experts (MoE) models. Built on extensive training, Qwen3 makes groundbreaking progress in reasoning, instruction following, agent capabilities, and multilingual support.
## Magnum V4 27b Gguf

**anthracite-org · 1,220 downloads · 31 likes** · *Large Language Model, English*

A dialogue model fine-tuned from Gemma 27b that aims to replicate the text quality of Claude 3; it uses the ChatML format for conversational interaction.
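The ChatML format mentioned above is a plain-text turn layout used by many chat fine-tunes. A minimal sketch of building such a prompt follows; the `<|im_start|>`/`<|im_end|>` delimiters are the common convention, but the exact special tokens a given model expects should be checked against its tokenizer configuration.

```python
# Minimal sketch of the ChatML prompt layout used by many chat fine-tunes.
# The <|im_start|>/<|im_end|> delimiters are assumed; verify them against
# the specific model's tokenizer config before use.

def to_chatml(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Leave the assistant turn open so the model completes it.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful writing assistant."},
    {"role": "user", "content": "Draft an opening line for a mystery novel."},
])
print(prompt)
```

Each turn is wrapped in start/end markers with the role on its own line; leaving the final assistant turn open tells the model where to continue generating.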
## Qwen2.5 1.5B Instruct GGUF

**Apache-2.0 · Mungert · 556 downloads · 4 likes** · *Large Language Model, English*

Qwen2.5 is the latest series of Qwen large language models; this 1.5B-parameter instruction-tuned model supports multilingual use and long text generation.
## Badger Writer Llama 3 8b

**maldv · 106 downloads · 10 likes** · *Large Language Model, Transformers*

Badger Writer is a normalized Fourier task superposition of multiple Llama 3 8B models, specializing in text generation and particularly strong at creative writing and instruction following.
## Zhi Writing Dsr1 14b

**Apache-2.0 · Zhihu-ai · 133 downloads · 16 likes** · *Large Language Model, Transformers, Multilingual*

A creative-writing model fine-tuned and optimized from DeepSeek-R1-Distill-Qwen-14B, showing significant improvements in creative writing quality.
## Gemma 3 4B It Qat GGUF

**lmstudio-community · 46.55k downloads · 10 likes** · *Image-to-Text*

Google's Gemma 3 4B IT model supports multimodal input and long-context processing, suiting it to text generation and image understanding tasks.
## JEE 14B

**ruh-ai · 475 downloads · 4 likes** · *Large Language Model, Transformers*

A text generation model fine-tuned from Qwen2.5-14B-Instruct using the TRL library.
## Qwen2.5 0.5B Instruct

**Apache-2.0 · Gensyn · 2.4M downloads · 5 likes** · *Large Language Model, Transformers, English*

A 0.5B-parameter instruction fine-tuned model designed for Gensyn's reinforcement-learning group, with support for local fine-tuning.
## Trillion 7B Preview AWQ

**Apache-2.0 · trillionlabs · 22 downloads · 4 likes** · *Large Language Model, Multilingual*

Trillion-7B Preview is a multilingual large language model supporting English, Korean, Japanese, and Chinese; it outperforms other 7B-scale models in computational efficiency and performance.
## T3Q Qwen2.5 14b V1.2 E2

**Apache-2.0 · JungZoona · 119 downloads · 8 likes** · *Large Language Model, Transformers, Multilingual*

T3Q-qwen2.5-14b-v1.2-e2 is a post-trained version of Qwen/Qwen2.5-14B-Instruct-1M, using the LoRA-8-4-0.0001-cosine-32-16 configuration and trained on train_data_v1.2.
## T3Q Qwen2.5 14b V1.0 E3 Q4 K M GGUF

**Apache-2.0 · Sangto · 1,126 downloads · 4 likes** · *Large Language Model, Multilingual*

A quantized build of Qwen2.5-14B-Instruct-1M converted to GGUF format for use with the llama.cpp framework.
## Sombrero QwQ 32B Elite11

**Apache-2.0 · prithivMLmods · 1,201 downloads · 8 likes** · *Large Language Model, Transformers, English*

A large language model optimized on Qwen's QwQ 32B architecture, focused on efficient memory utilization, programming assistance, and complex problem-solving.
## Hiber Multi 10B Instruct

**Hibernates · 86 downloads · 2 likes** · *Large Language Model, Transformers, Multilingual*

Hiber-Multi-10B-Instruct is a 10-billion-parameter multilingual Transformer language model suited to text generation tasks.
## Allam 7B Instruct Preview

**Apache-2.0 · ALLaM-AI · 8,686 downloads · 109 likes** · *Large Language Model, Transformers, Multilingual*

ALLaM is a 7-billion-parameter Arabic large language model developed by the Saudi Data and Artificial Intelligence Authority (SDAIA), trained from scratch and supporting both Arabic and English.
## ARWKV R1 7B

**Apache-2.0 · RWKV-Red-Team · 113 downloads · 10 likes** · *Large Language Model, Transformers, Multilingual*

A pure RNN-based 7B-parameter model trained via knowledge distillation, showcasing RWKV-7's efficient recurrent mechanism and attention-free architecture.
## Rwkv7 1.5B World

**Apache-2.0 · fla-hub · 632 downloads · 9 likes** · *Large Language Model, Transformers, Multilingual*

An RWKV-7 model built on a flash linear attention architecture, supporting multilingual text generation tasks.
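The attention-free recurrence behind RWKV-style models can be illustrated with a toy linear-attention cell: rather than attending over the whole history, key/value information is folded into a fixed-size state, so memory stays constant in sequence length. The numpy sketch below is a simplified illustration, not RWKV-7's actual update rule (which adds per-channel decays, bonuses, and gating).

```python
import numpy as np

def linear_attention_step(state, q, k, v, decay=0.9):
    """One recurrent step of a toy linear-attention cell.

    state: (d, d) decayed running sum of outer products k (x) v
    q, k, v: (d,) vectors for the current token
    Returns (new_state, output). Illustrative only; real RWKV-7 uses
    per-channel decays and gating rather than a single scalar decay.
    """
    new_state = decay * state + np.outer(k, v)
    out = q @ new_state
    return new_state, out

d = 4
rng = np.random.default_rng(0)
state = np.zeros((d, d))
outputs = []
for _ in range(6):  # process a 6-token sequence one step at a time
    q, k, v = rng.standard_normal((3, d))
    state, out = linear_attention_step(state, q, k, v)
    outputs.append(out)

# The state stays (d, d) no matter how long the sequence gets.
print(state.shape, len(outputs))
```

The key property is visible in the loop: each token costs O(d^2) work and the state never grows, which is what makes this family of models attractive for long text generation.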
## Llama 3.3 70B Instruct Abliterated Finetuned GPTQ Int8

**huihui-ai · 7,694 downloads · 12 likes** · *Large Language Model, Transformers, Multilingual*

A GPTQ 8-bit quantized version of Llama-3.3-70B-Instruct, fine-tuned and optimized for conversational reasoning tasks.
## Bamba 9B V1

**Apache-2.0 · ibm-ai-platform · 16.19k downloads · 35 likes** · *Large Language Model*

Bamba-9B is a decoder-only language model based on the Mamba-2 architecture, trained in two stages and strong across a wide range of text generation tasks.
## Magnum V4 123b Gguf

**Other license · anthracite-org · 380 downloads · 7 likes** · *Large Language Model, English*

A 123B-parameter model fine-tuned from Mistral-Large-Instruct, aiming to reproduce Claude 3's text generation quality.
## Opus V1.2 Llama 3 8b

**dreamgen · 21 downloads · 53 likes** · *Large Language Model, Transformers, English*

A Llama 3 8B model optimized for story writing and role-play, supporting steerable narratives and interactive experiences.
## Wizardlm 2 8x22B GGUF

**Apache-2.0 · MaziyarPanahi · 9,720 downloads · 127 likes** · *Large Language Model*

WizardLM-2-8x22B-GGUF is a GGUF quantization of Microsoft's WizardLM-2-8x22B, available at multiple bit widths and suited to text generation tasks.
## Goku 8x22B V0.1

**Apache-2.0 · MaziyarPanahi · 35 downloads · 9 likes** · *Large Language Model, Transformers, Multilingual*

A multilingual large model fine-tuned from Mixtral-8x22B-v0.1, with 141B total parameters and 35B active parameters.
## Rwkv 5 World 7b

**Apache-2.0 · SmerkyG · 19 downloads · 1 like** · *Large Language Model, Transformers*

RWKV-5 Eagle 7B is a 7B-parameter large language model based on the RWKV architecture, supporting Chinese text generation tasks.
## Mamba 2.8b Hf

**state-spaces · 8,731 downloads · 103 likes** · *Large Language Model, Transformers*

A 2.8-billion-parameter language model based on the Mamba architecture, compatible with the HuggingFace Transformers library.
## Longalpaca 13B GGUF

**MaziyarPanahi · 285 downloads · 3 likes** · *Large Language Model*

LongAlpaca-13B-GGUF is a GGUF quantization of Yukang/LongAlpaca-13B, offered in 2- to 8-bit variants for local text generation.
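The 2- to 8-bit variants above trade file size against fidelity. The core idea can be sketched with simple symmetric round-to-nearest quantization of a weight tensor; this is only an illustration, since real GGUF schemes (Q4_K_M and friends) quantize block-wise with their own scales and offsets.

```python
import numpy as np

def quantize_symmetric(w, bits):
    """Round-to-nearest symmetric quantization of a weight vector.

    Returns integer codes and the scale needed to dequantize.
    Illustrative only: GGUF formats quantize in blocks, each with its
    own scale (and sometimes minimum), not one scale per tensor.
    """
    qmax = 2 ** (bits - 1) - 1            # e.g. 7 for 4-bit
    scale = np.abs(w).max() / qmax
    codes = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return codes, scale

def dequantize(codes, scale):
    return codes.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.standard_normal(256).astype(np.float32)

errors = {}
for bits in (8, 4, 2):
    codes, scale = quantize_symmetric(w, bits)
    errors[bits] = float(np.abs(dequantize(codes, scale) - w).mean())
    print(f"{bits}-bit mean abs error: {errors[bits]:.4f}")
```

Running this shows the reconstruction error growing as the bit width shrinks, which is exactly the size/quality trade-off the multi-bit GGUF releases expose.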
## Opus V1.2 7b 8.0bpw H8 Exl2

**LoneStriker · 37 downloads · 2 likes** · *Large Language Model, Transformers, English*

A 7B-parameter model from a series built specifically for steerable story writing and role-play.
## V5 Eagle 7B HF

**Apache-2.0 · RWKV · 6,788 downloads · 72 likes** · *Large Language Model, Transformers*

RWKV-5 Eagle 7B is a 7B-parameter large language model based on the RWKV architecture, supporting Chinese text generation tasks.
## Llama2 13B Estopia

**KoboldAI · 26 downloads · 20 likes** · *Large Language Model, Transformers*

Estopia focuses on improving the quality of dialogue and prose generation under instruction formats, excelling at character cards, detail retention, and guided narratives.
## Mythomist 7b

**Other license · Gryphe · 214 downloads · 34 likes** · *Large Language Model, Transformers, English*

MythoMist 7B is a highly experimental Mistral-based merge that uses an algorithmic, benchmark-driven merging process to pursue a user-defined objective, in this case reducing the frequency of certain overused, negatively associated words.
## Nethena 20B GPTQ

**TheBloke · 29 downloads · 7 likes** · *Large Language Model, Transformers*

Nethena-20B is a 20B-parameter large language model developed by NeverSleep, suited to role-play, emotional interaction, and general use.
## Mythomax L2 13B AWQ

**Other license · TheBloke · 1,555 downloads · 11 likes** · *Large Language Model, Transformers, English*

An AWQ quantization of MythoMax L2 13B that improves inference efficiency.
## Pygmalion 2 13B GGUF

**TheBloke · 2,598 downloads · 29 likes** · *Large Language Model, English*

Pygmalion 2 13B, developed by PygmalionAI, is a LLaMA-based large language model specializing in text generation and role-play dialogue.
## Ziya Writing LLaMa 13B V1

**GPL-3.0 · IDEA-CCNL · 23 downloads · 17 likes** · *Large Language Model, Transformers, Multilingual*

Ziya Writing V1 is a 13-billion-parameter LLaMA-based instruction fine-tuned model specializing in writing tasks such as official documents, speeches, letters, and creative copywriting.
## 30B Lazarus

**CalderaAI · 136 downloads · 119 likes** · *Large Language Model, Transformers, Other*

30B-Lazarus is an experimental model combining LoRA adapters and model merging, aiming to stack desirable model traits while avoiding feature dilution.
## Pythia 2.8b Deduped Synthetic Instruct

**Apache-2.0 · lambdalabs · 46 downloads · 6 likes** · *Large Language Model, Transformers, English*

An instruction-following model fine-tuned from the deduplicated Pythia-2.8B on a synthetic instruction dataset.
## Glm 2b

**THUDM · 60 downloads · 16 likes** · *Large Language Model, Transformers, English*

GLM-2B is a general-purpose language model pre-trained with an autoregressive blank-filling objective, supporting a range of natural language understanding and generation tasks.
## Gerpt2

**MIT · benjamin · 48 downloads · 5 likes** · *Large Language Model, German*

GerPT2 is a large German GPT2-architecture language model trained on the CC-100 and German Wikipedia datasets, outperforming comparable German GPT2 models.